
    Comparison of multiple machine learning algorithms for urban air quality forecasting

    Environmental air pollution has become one of the major threats to human lives in both developed and developing countries. Various air pollution forecasting models exist; however, machine learning models have proved to be among the most efficient methods for prediction. In this paper, we assessed the ability of machine learning techniques to forecast NO2, SO2, and PM10 in Amman, Jordan. We compared multiple machine learning methods, including artificial neural networks, support vector regression, decision tree regression, and extreme gradient boosting. We also investigated the effect of the distance between the pollution station and the meteorological station on the prediction results, explored the most relevant seasonal variables, and identified the minimal set of features required for prediction in order to improve prediction time. The experiments showed promising results for predicting air pollution in Amman, with the artificial neural network outperforming the other algorithms and scoring RMSEs of 0.949 ppb, 0.451 ppb, and 5.570 µg/m3 for NO2, SO2, and PM10, respectively. Our results indicated that predictions were better when the meteorological variables were obtained from the same pollution station. We were also able to reduce prediction time by cutting the set of variables required for prediction from 11 down to 3, achieving time improvements of about 80% for NO2, 92% for SO2, and 90% for PM10. The most important variables for predicting NO2 were the previous-day values of NO2, humidity, and wind direction; for SO2, the previous-day values of SO2, temperature, and wind direction; and for PM10, the previous-day values of PM10, humidity, and day of the year.
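
    As an illustration of the kind of model comparison described in this abstract, the sketch below fits the four regressor families on a synthetic stand-in for the reduced three-feature NO2 set (previous-day NO2, humidity, wind direction) and reports RMSE. It is not the authors' code: the data are fabricated placeholders, and scikit-learn's gradient boosting is used as a stand-in for XGBoost.

```python
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import StandardScaler
from sklearn.neural_network import MLPRegressor
from sklearn.svm import SVR
from sklearn.tree import DecisionTreeRegressor
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.metrics import mean_squared_error

# Synthetic stand-in data; in the paper these are measured values from
# monitoring stations in Amman.
rng = np.random.default_rng(0)
n = 730
no2_prev = rng.gamma(4.0, 2.0, n)        # previous-day NO2 (ppb)
humidity = rng.uniform(20, 90, n)        # relative humidity (%)
wind_dir = rng.uniform(0, 360, n)        # wind direction (degrees)
no2_today = 0.8 * no2_prev + 0.02 * humidity + rng.normal(0, 1, n)

X = np.column_stack([no2_prev, humidity, wind_dir])
y = no2_today

# Keep temporal order when splitting a forecasting dataset.
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, shuffle=False)
scaler = StandardScaler().fit(X_train)

models = {
    "ANN": MLPRegressor(hidden_layer_sizes=(32, 16), max_iter=2000, random_state=0),
    "SVR": SVR(kernel="rbf", C=10.0),
    "Decision tree": DecisionTreeRegressor(max_depth=6, random_state=0),
    "Gradient boosting": GradientBoostingRegressor(random_state=0),  # stand-in for XGBoost
}

for name, model in models.items():
    model.fit(scaler.transform(X_train), y_train)
    pred = model.predict(scaler.transform(X_test))
    print(f"{name}: RMSE = {np.sqrt(mean_squared_error(y_test, pred)):.3f} ppb")
```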

    Secure Federated Learning with a Homomorphic Encryption Model

    Federated learning (FL) offers collaborative machine learning across decentralized devices while safeguarding data privacy. However, data security and privacy remain key concerns. This paper introduces "Secure Federated Learning with a Homomorphic Encryption Model," addressing these challenges by integrating homomorphic encryption into FL. The model starts by initializing a global machine learning model and generating a homomorphic encryption key pair, with the public key shared among FL participants. Using this public key, participants then collect, preprocess, and encrypt their local data. During the FL training rounds, participants decrypt the global model, compute local updates on encrypted data, encrypt these updates, and securely send them to the aggregator. The aggregator homomorphically combines the updates without revealing participant data and forwards the encrypted aggregated update to the global model owner. In the global model update step, the owner decrypts the aggregated update using the private key, updates the global model, encrypts it with the public key, and shares the encrypted global model with the FL participants. With optional model evaluation, training can iterate for several rounds or until convergence. This model offers a robust solution to FL data privacy and security issues, with versatile applications across domains. The paper presents the core model components, advantages, and potential domain-specific implementations while making significant strides in addressing FL's data privacy concerns.
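
    The sketch below illustrates one encrypted-aggregation round of the workflow described above, using the python-paillier (phe) library as one possible additively homomorphic scheme. The paper does not name a specific library or scheme, so this choice, the toy update vectors, and the key size are assumptions made for illustration.

```python
from functools import reduce
import numpy as np
from phe import paillier

# Key generation: the public key is shared with all FL participants;
# the private key stays with the global model owner.
public_key, private_key = paillier.generate_paillier_keypair(n_length=1024)

# Each participant computes a local model update (here: a tiny weight vector).
local_updates = [np.random.randn(4) for _ in range(3)]

# Participants encrypt their updates with the shared public key before upload.
encrypted_updates = [[public_key.encrypt(float(w)) for w in update]
                     for update in local_updates]

# The aggregator adds ciphertexts element-wise; with Paillier, adding
# ciphertexts corresponds to adding the underlying plaintexts, so the
# aggregator never sees any individual update.
aggregated = [reduce(lambda a, b: a + b, column)
              for column in zip(*encrypted_updates)]

# The global model owner decrypts the aggregate and averages it to update
# the global model, which is then re-encrypted and redistributed.
global_update = np.array([private_key.decrypt(c) for c in aggregated]) / len(local_updates)
print("decrypted averaged update:", global_update)
print("plaintext reference:      ", np.mean(local_updates, axis=0))
```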

    Naïve Bayesian Classification Based Glioma Brain Tumor Segmentation Using Grey Level Co-occurrence Matrix Method

    Brain tumors vary widely in size and form, making detection and diagnosis difficult. The main aim of this study is to identify abnormal brain images, distinguish them from normal brain images, and then segment the tumor areas from the classified brain images. In this study, we offer a technique based on the Naïve Bayesian classification approach that can efficiently identify and segment brain tumors. Noise is identified and filtered out during the preprocessing phase of tumor identification. After preprocessing the brain image, GLCM and probabilistic features are extracted. A Naïve Bayesian classifier is then trained on and used to label the extracted features. Once the tumors in a brain image have been classified, the watershed segmentation approach is used to isolate them. The brain images used in this paper are from the BRATS 2015 dataset. The suggested approach has a classification rate of 99.2% for MR images of normal brain tissue and 97.3% for MR images of abnormal glioma brain tissue. The proposed detection and segmentation strategy achieves a 97.54% Probability of Detection (POD), a 92.18% Probability of False Detection (POFD), a 98.17% Critical Success Index (CSI), and a 98.55% Percentage of Corrects (PC). The recommended glioma brain tumor detection technique outperforms existing state-of-the-art approaches in POD, POFD, CSI, and PC because it can identify tumor locations in abnormal brain images.
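
    A minimal sketch of the feature-extraction and classification stage described above (GLCM texture features fed to a naive Bayes classifier) is given below. It uses scikit-image and scikit-learn on synthetic patches rather than BRATS 2015 slices, and the preprocessing, probabilistic features, and watershed step are omitted; it is an illustration, not the authors' pipeline.

```python
import numpy as np
from skimage.feature import graycomatrix, graycoprops  # scikit-image >= 0.19 naming
from sklearn.naive_bayes import GaussianNB

PROPS = ["contrast", "homogeneity", "energy", "correlation"]

def glcm_features(patch):
    """Compute a small GLCM texture feature vector for one 8-bit grayscale patch."""
    glcm = graycomatrix(patch, distances=[1], angles=[0, np.pi / 2],
                        levels=256, symmetric=True, normed=True)
    return np.array([graycoprops(glcm, p).mean() for p in PROPS])

# Synthetic stand-in data: in the paper, patches would come from BRATS 2015 slices.
rng = np.random.default_rng(0)
normal = rng.integers(0, 120, size=(50, 32, 32), dtype=np.uint8)   # darker tissue
tumor = rng.integers(80, 256, size=(50, 32, 32), dtype=np.uint8)   # brighter, noisier tissue
patches = np.concatenate([normal, tumor])
labels = np.array([0] * 50 + [1] * 50)                              # 0 = normal, 1 = glioma

X = np.stack([glcm_features(p) for p in patches])
clf = GaussianNB().fit(X, labels)
print("training accuracy:", clf.score(X, labels))
```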

    A new model for large dataset dimensionality reduction based on teaching learning-based optimization and logistic regression

    Breast cancer (BC) is one of the human diseases with a high rate of mortality each year. Among all forms of cancer, BC is the most common cause of death among women globally. Data mining and classification methods are effective approaches to data classification. These methods are particularly useful in the medical field because medical datasets contain irrelevant and redundant attributes, which are not needed to obtain an accurate estimation of disease diagnosis. Teaching learning-based optimization (TLBO) is a new metaheuristic that has been successfully applied to several intractable optimization problems in recent years. This paper presents the use of a multi-objective TLBO algorithm for the selection of feature subsets in automatic BC diagnosis. For the classification task in this work, the logistic regression (LR) method was deployed. The results show that the proposed method produced better classification accuracy on the BC dataset (classified into malignant and benign). This result shows that the proposed TLBO is an efficient feature optimization technique for sustaining data-based decision-making systems.
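
    The sketch below shows a simplified, single-objective TLBO wrapper around logistic regression for feature selection, run on scikit-learn's built-in breast cancer dataset. It illustrates the teacher and learner phases only and is not the authors' multi-objective implementation; the population size, iteration count, and 0.5 selection threshold are arbitrary choices for the illustration.

```python
import numpy as np
from sklearn.datasets import load_breast_cancer
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

X, y = load_breast_cancer(return_X_y=True)
rng = np.random.default_rng(0)
n_pop, n_feat, n_iter = 10, X.shape[1], 10

def fitness(pos):
    """Cross-validated accuracy of LR on the features selected by pos > 0.5."""
    mask = pos > 0.5
    if not mask.any():
        return 0.0
    clf = LogisticRegression(max_iter=5000)
    return cross_val_score(clf, X[:, mask], y, cv=3).mean()

pop = rng.random((n_pop, n_feat))
scores = np.array([fitness(p) for p in pop])

for _ in range(n_iter):
    teacher = pop[scores.argmax()]
    mean = pop.mean(axis=0)
    for i in range(n_pop):
        # Teacher phase: move each learner toward the current best solution.
        tf = rng.integers(1, 3)  # teaching factor in {1, 2}
        cand = np.clip(pop[i] + rng.random(n_feat) * (teacher - tf * mean), 0, 1)
        if (s := fitness(cand)) > scores[i]:
            pop[i], scores[i] = cand, s
        # Learner phase: learn from a randomly chosen peer.
        j = rng.integers(n_pop)
        direction = pop[j] - pop[i] if scores[j] > scores[i] else pop[i] - pop[j]
        cand = np.clip(pop[i] + rng.random(n_feat) * direction, 0, 1)
        if (s := fitness(cand)) > scores[i]:
            pop[i], scores[i] = cand, s

best = pop[scores.argmax()] > 0.5
print(f"selected {best.sum()} of {n_feat} features, CV accuracy = {scores.max():.3f}")
```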

    Clinical Evaluation of Pit and Fissure Sealants Placed by Undergraduate Dental Students in 5-15-Year-Old Children in Iraq

    Objective: To clinically evaluate the retention and marginal discoloration of pit and fissure sealants applied to primary and permanent teeth. Material and Methods: The study population comprised 5-15-year-old children. After consent was obtained, a light-curing sealant was applied to the etched pits and fissures of the occlusal surfaces of selected sound teeth. The retention rate and marginal discoloration were assessed 3 months after application of the sealants based on Simonsen’s criteria (total retention: score 0; partial loss: score 1; total loss: score 2). Each tooth was considered an independent sample during analysis. Results: The achieved sample size was 43 children aged 5-15 years (mean age = 10.0 years), and data from 100 teeth were used for the final analysis. The percentage of completely retained sealants (59%) was higher than the percentages of partially retained sealants (23%) and completely missing sealants (18%) at the 3-month follow-up. Out of 100 sealed teeth, 60% either had marginal discoloration or were completely missing. Using the Mann-Whitney test, there was a statistically significant difference (p<0.05) between primary and permanent teeth in terms of retention. However, there was no statistically significant difference (p>0.05) between upper and lower teeth in terms of retention. Conclusion: The success rate of fissure sealants at the 3-month follow-up was satisfactory.
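
    For readers unfamiliar with the test used above, the sketch below runs a Mann-Whitney comparison of Simonsen retention scores between primary and permanent teeth on fabricated example scores; the numbers are illustrative only and are not the study's data.

```python
from scipy.stats import mannwhitneyu

# Hypothetical Simonsen retention scores: 0 = total retention,
# 1 = partial loss, 2 = total loss.
primary_scores = [0, 0, 1, 2, 1, 0, 2, 1, 2, 0]     # fabricated example
permanent_scores = [0, 0, 0, 1, 0, 0, 1, 0, 0, 0]   # fabricated example

stat, p_value = mannwhitneyu(primary_scores, permanent_scores, alternative="two-sided")
print(f"U = {stat:.1f}, p = {p_value:.4f}")  # p < 0.05 would indicate a retention difference
```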

    Study of the field of view influence on the monochromatic and polychromatic image quality of a human eye

    In this paper, the effect of the eye's field of view (F.O.V.) on the performance and image quality of the human eye is studied, analyzed, and presented in detail. The retinal image quality is numerically analyzed using the Liou and Brennan eye model with a polymer contact lens. The digital images were collected from various sources, such as photos, text, manuscripts, and graphics, obtained from scanned documents or a scene. Color fringing and chromatic aberration, in addition to the polychromatic effect, were studied and analyzed. The Point Spread Function (PSF) and the Modulation Transfer Function (MTF) were computed as the most appropriate measures of image quality. The image quality calculations were performed using the Zemax software, and the results demonstrate the value of correcting the chromatic aberration. The results presented in this paper show that the image quality depends strongly on the eye's F.O.V.: image quality degrades as the F.O.V. increases due to the increase in spherical aberration and distortion. In conclusion, the Zemax software used in this study helps researchers model the human eye and correct its aberrations using external optics.
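
    As a reminder of how the MTF metric relates to the PSF, the sketch below computes the MTF as the normalized magnitude of the Fourier transform of a stand-in Gaussian PSF; a real analysis would use the PSF exported from Zemax rather than this synthetic blur, and the grid spacing and blur width are arbitrary assumptions.

```python
import numpy as np

n, pitch = 256, 1.0e-3                      # grid size and sample spacing (mm)
x = (np.arange(n) - n // 2) * pitch
xx, yy = np.meshgrid(x, x)

sigma = 5.0e-3                               # width of the stand-in Gaussian PSF (mm)
psf = np.exp(-(xx**2 + yy**2) / (2 * sigma**2))
psf /= psf.sum()                             # normalize total energy

# MTF = |OTF| where the OTF is the Fourier transform of the PSF,
# normalized to 1 at zero spatial frequency.
otf = np.fft.fftshift(np.fft.fft2(np.fft.ifftshift(psf)))
mtf = np.abs(otf) / np.abs(otf).max()

freq = np.fft.fftshift(np.fft.fftfreq(n, d=pitch))  # spatial frequency (cycles/mm)
print("MTF at ~10 cycles/mm:", mtf[n // 2, np.argmin(np.abs(freq - 10))])
```

    A wider PSF (more aberration) yields a lower MTF, which is the mechanism behind the degradation with increasing F.O.V. noted above.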

    Cloud computing issues, challenges, and needs: A survey

    Cloud computing is a form of computing based on sharing computing resources rather than relying on personal devices or local servers to handle applications and tasks. It includes three distinct kinds of services provided remotely to clients and accessed over the Internet. Typically, clients pay annual or monthly service fees to suppliers in order to gain access to systems that deliver infrastructure as a service, platform as a service, and software as a service to any subscriber. In this paper, the usefulness and the abuse of cloud computing are briefly discussed by highlighting the influence of cloud computing in different areas. Moreover, the paper presents the types and services of the cloud. In addition, security, one of the biggest issues in cloud computing in recent years, is addressed, covering both the requirements of cloud security solutions and the open cloud security issues. The security requirements of cloud computing cover privacy, lack of user control, unauthorized secondary usage, and data proliferation and data flow, while the security issues include device ownership, trust, and legal aspects. Finally, the paper presents solutions to overcome these security issues.

    Telecardiology Application in Jordan: Its Impact on Diagnosis and Disease Management, Patients’ Quality of Life, and Time- and Cost-Savings

    Objectives. To assess the impact of live interactive telecardiology on diagnosis and disease management, patients’ quality of life, and time- and cost-savings. Methods. All consecutive patients who attended or were referred to the teleclinics for suspected cardiac problems in two hospitals in remote areas of Jordan during the study period were included in the study. Patients were interviewed for relevant information, and their quality of life was assessed during the first visit and 8 weeks after the last visit. Results. A total of 76 patients were included in this study. A final diagnosis and a treatment plan were established as part of the telecardiology consultations in 71.1% and 77.3% of patients, respectively. Travel was avoided for 38 patients (50.0%) who were managed locally. The majority of patients perceived that the visit to the telecardiology clinic resulted in less travel time (96.1%), less waiting time (98.1%), and lower cost (100.0%). Telecardiology consultations resulted in an improvement in quality of life two months after the first visit. Conclusions. Telecardiology care in remote areas of Jordan would improve access to health care, help reach a proper diagnosis and establish a treatment plan, and improve patients’ quality of life.

    Fuzzy Generalized Hebbian Algorithm for Large-Scale Intrusion Detection System

    The huge amount of irrelevant and redundant data used in building intrusion detection systems (IDS) is one of the common issues in network intrusion detection. This paper proposes the use of the Fuzzy Generalized Hebbian Algorithm (Fuzzy GHA) as a novel data reduction method to overcome this problem of data redundancy in IDS. Two dimensionality reduction methods (GHA and Fuzzy GHA) were used and compared in this study, allowing the most relevant traffic information to be retained from the network data. Furthermore, the K-Nearest Neighbor algorithm was applied to classify the test connections into two categories (attack or normal). The investigations were carried out on the KDDCUP ’99 dataset, and the results showed that the Fuzzy GHA method performs better than GHA in the detection of both U2R and DoS attacks.
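
    The sketch below illustrates the plain GHA (Sanger's rule) reduction step followed by K-Nearest Neighbor classification on synthetic data. The fuzzy weighting described in the paper and the KDDCUP ’99 loading and encoding pipeline are omitted, so this is only a rough outline of the kind of pipeline involved, not the authors' implementation.

```python
import numpy as np
from sklearn.neighbors import KNeighborsClassifier

rng = np.random.default_rng(0)
X_raw = rng.standard_normal((500, 20))                 # stand-in for 20 traffic features
y = (X_raw[:, 0] + X_raw[:, 3] > 0).astype(int)        # stand-in attack/normal label
X = (X_raw - X_raw.mean(axis=0)) / X_raw.std(axis=0)   # GHA assumes centred inputs

n_components, lr = 5, 0.01
W = rng.standard_normal((n_components, X.shape[1])) * 0.1

for epoch in range(20):
    for x in X:
        out = W @ x                                     # projection onto current components
        # Sanger's rule: dW = lr * (y x^T - lower_triangular(y y^T) W)
        W += lr * (np.outer(out, x) - np.tril(np.outer(out, out)) @ W)

Z = X @ W.T                                             # reduced representation
clf = KNeighborsClassifier(n_neighbors=5).fit(Z[:400], y[:400])
print("held-out accuracy:", clf.score(Z[400:], y[400:]))
```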

    Towards Artificial Intelligence-Based Cybersecurity: The Practices and ChatGPT Generated Ways to Combat Cybercrime

    Today, cybersecurity is considered one of the most noteworthy topics discussed frequently among companies seeking to protect their data from hacking operations. The emergence of cyberspace has contributed to the growth of electronic systems; it is a virtual digital space through which computers and smartphones are interconnected within the Internet of Things environment. This space is critical to building a safe digital environment free of threats and cybercrime, since such an environment is only possible with cyberspace and the modern technologies it contains that keep it out of the reach of unauthorized individuals. Cybersecurity faces a wide range of challenges and obstacles, and it is difficult for companies to confront them. In this report, the most significant practices and sound strategies are studied to stop cybercrime and build a digital environment that guarantees safe data transfer between electronic devices without the presence of malicious software. The report concludes that the procedures provided by cybersecurity are necessary and must be maintained and further developed.